Smooth UCT Search in Computer Poker


Heinrich, Johannes (University College London) | Silver, David (Google DeepMind)

AAAI Conferences

Self-play Monte Carlo Tree Search (MCTS) has been successful in many perfect-information two-player games. Although these methods have been extended to imperfect-information games, so far they have not achieved the same level of practical success or theoretical convergence guarantees as competing methods. In this paper we introduce Smooth UCT, a variant of the established Upper Confidence Bounds Applied to Trees (UCT) algorithm. Smooth UCT agents mix in their average policy during self-play and the resulting planning process resembles game-theoretic fictitious play. When applied to Kuhn and Leduc poker, Smooth UCT approached a Nash equilibrium, whereas UCT diverged. In addition, Smooth UCT outperformed UCT in Limit Texas Hold'em and won 3 silver medals in the 2014 Annual Computer Poker Competition.
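The abstract's core mechanism is that, at each tree node, a Smooth UCT agent either acts by the usual UCT rule or plays its average policy, so that self-play resembles fictitious play. A minimal sketch of that selection step, assuming a constant mixing parameter `eta` and a simple per-node statistics record (the names `Node`, `visits`, `value`, and `smooth_uct_select` are illustrative, not from the paper, which also anneals the mixing parameter rather than fixing it):

```python
import math
import random

class Node:
    """Statistics for one information state: per-action visit counts
    and mean values (illustrative structure, not from the paper)."""
    def __init__(self, actions):
        self.visits = {a: 0 for a in actions}   # N(s, a)
        self.value = {a: 0.0 for a in actions}  # mean return Q(s, a)
        self.n = 0                              # N(s), total visits

def smooth_uct_select(node, eta=0.1, c=1.8):
    """Smooth UCT action selection (sketch): with probability eta act
    greedily by the UCB1 rule; otherwise sample from the average
    policy, approximated here by normalized visit counts."""
    actions = list(node.visits)
    if random.random() < eta or node.n == 0:
        # UCT branch: maximize mean value plus an exploration bonus.
        def ucb(a):
            if node.visits[a] == 0:
                return float("inf")  # try unvisited actions first
            bonus = c * math.sqrt(math.log(node.n) / node.visits[a])
            return node.value[a] + bonus
        return max(actions, key=ucb)
    # Average-policy branch: sample actions in proportion to how
    # often they were chosen so far, mimicking the average strategy.
    weights = [node.visits[a] for a in actions]
    return random.choices(actions, weights=weights)[0]
```

After each simulated episode the caller would update `visits`, `value`, and `n` along the sampled path, as in plain UCT; only the selection rule changes.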


Smooth UCT Search in Computer Poker

Heinrich, Johannes (University College London) | Silver, David (Google DeepMind)

AAAI Conferences

They concluded that UCT quickly finds a good but suboptimal policy, while Outcome Sampling initially learns more slowly but converges to the optimal policy over time. In this paper, we address the question of whether the inability of UCT to converge to a Nash equilibrium can be overcome while retaining UCT's fast initial learning rate. We focus on the full-game MCTS setting, which is an important step towards developing sound variants of online MCTS in imperfect-information games. In particular, we introduce Smooth UCT, which combines …